20 research outputs found

    Fast Uncertainty Estimation for Deep Learning Based Optical Flow

    We present a novel approach to reduce the processing time required to derive the estimation uncertainty map in deep learning-based optical flow determination methods. Without uncertainty-aware reasoning, an optical flow model can cause catastrophic failures, especially when it is used in mission-critical fields such as robotics and aerospace. Although several approaches, such as those based on Bayesian neural networks, have been proposed to handle this issue, they are computationally expensive. To speed up the processing time, our approach therefore applies a generative model that is trained on input images and an uncertainty map derived through a Bayesian approach. Using synthetically generated images of spacecraft, we demonstrate that the trained generative model can produce the uncertainty map 100∼700 times faster than the conventional uncertainty estimation method used to train the generative model itself. We also show that the quality of the uncertainty map derived by the generative model is close to that of the original uncertainty map. By applying the proposed approach, a deep learning model operated in real time can avoid disastrous failures by taking the uncertainty into account, as well as achieve better performance by removing uncertain portions of the prediction result.
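
    As a rough illustration of the distillation idea described in this abstract, the sketch below trains a small convolutional network to regress the uncertainty map produced by a slower Bayesian estimator, so that at run time only one forward pass is needed. The architecture, tensor sizes, and training data are placeholders, not the authors' model.

```python
# Minimal sketch (assumed architecture, not the paper's): learn to reproduce a
# Bayesian uncertainty map from an image pair with a single fast forward pass.
import torch
import torch.nn as nn

class UncertaintyNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: two stacked grayscale frames (2 channels); output: 1-channel uncertainty map.
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, image_pair):
        return self.net(image_pair)

model = UncertaintyNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Dummy batch standing in for (image pair, Bayesian uncertainty map) training data.
image_pair = torch.rand(4, 2, 128, 128)
bayesian_uncertainty = torch.rand(4, 1, 128, 128)

for step in range(10):
    optimizer.zero_grad()
    prediction = model(image_pair)
    loss = loss_fn(prediction, bayesian_uncertainty)  # distill the slow estimator
    loss.backward()
    optimizer.step()
```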

    Monocular-Based Pose Determination of Uncooperative Space Objects

    Vision-based methods to determine the relative pose of an uncooperative orbiting object are investigated in applications to spacecraft proximity operations, such as on-orbit servicing, spacecraft formation flying, and small-body exploration. Depending on whether the object is known or unknown, a shape model of the orbiting target may have to be constructed autonomously in real time using only optical measurements. The Simultaneous Estimation of Pose and Shape (SEPS) algorithm, which does not require a priori knowledge of the pose and shape of the target, is presented. It makes use of a novel measurement equation and filter that can efficiently use optical flow information together with a star tracker to estimate the target's relative rotational and translational velocity as well as its center of gravity. Depending on the mission constraints, SEPS can be augmented by a more accurate offline, on-board 3D reconstruction of the target shape, which then allows the pose to be estimated as for a known target. The use of Structure from Motion (SfM) for this purpose is discussed. A model-based approach for pose estimation of known targets is also presented. The architecture and implementation of both proposed approaches are elucidated, and their performance metrics are evaluated through numerical simulations using a dataset of images that are synthetically generated according to a chaser/target relative motion in Geosynchronous Orbit (GEO).
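
    To make the filtering step concrete, the sketch below runs a plain constant-velocity Kalman filter over the target's relative translational and angular velocity, updated with a velocity pseudo-measurement standing in for the optical-flow and star-tracker front end. This is an illustrative stand-in, not the SEPS measurement equation or filter; all matrices and noise levels are assumptions.

```python
# Minimal sketch (illustrative only): constant-velocity Kalman filter over
# state x = [relative translational velocity (3), relative angular velocity (3)].
import numpy as np

dt = 0.1                       # frame interval [s] (placeholder)
x = np.zeros(6)                # state estimate
P = np.eye(6)                  # state covariance
F = np.eye(6)                  # constant-velocity model: velocities persist between frames
Q = 1e-4 * np.eye(6)           # process noise (placeholder)
H = np.eye(6)                  # assume the front end yields a full velocity observation
R = 1e-2 * np.eye(6)           # measurement noise (placeholder)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

# One predict/update cycle with a fake measurement in place of the vision front end.
z = np.array([0.02, -0.01, 0.0, 0.001, 0.0, 0.002])
x, P = predict(x, P)
x, P = update(x, P, z)
```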

    Autonomous Small Body Mapping and Spacecraft Navigation Via Real-Time SPC-SLAM

    Current methods for pose and shape estimation of small bodies, such as comets and asteroids, rely on extensive ground support and significant use of radiometric measurements from the Deep Space Network. The Stereo-Photoclinometry (SPC) technique is currently used to provide detailed topographic information about a small body as well as its absolute orientation and position. While this technique has produced very accurate estimates, the core algorithm cannot be run in real time and requires a team of scientists on the ground who must communicate with the spacecraft in order to oversee SPC operations. Autonomous onboard navigation addresses these limitations by eliminating the need for human oversight. In this paper, we present an optimization-based estimation algorithm for navigation that allows the spacecraft to autonomously approach and maneuver around an unknown small body by mapping its geometric shape, estimating its orientation, and simultaneously determining the trajectory of the small body's center of mass. We show the effectiveness of the proposed algorithm using simulated data from a previous flight mission to Comet 67P.
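
    As a toy stand-in for the optimization-based estimation mentioned above, the sketch below refines the 3D position of a single surface landmark by minimizing pinhole reprojection residuals over two viewpoints with nonlinear least squares; a full pipeline would jointly optimize many landmarks and the spacecraft trajectory. Geometry, intrinsics, and noise values are made up for illustration.

```python
# Minimal sketch (illustrative): landmark refinement by nonlinear least squares
# on reprojection residuals; camera rotations are assumed to be identity.
import numpy as np
from scipy.optimize import least_squares

f = 500.0                                        # focal length [px] (placeholder)

def project(point, cam_pos):
    p = point - cam_pos                          # landmark expressed in the camera frame
    return f * p[:2] / p[2]                      # pinhole projection onto the image plane

true_point = np.array([1.0, -0.5, 10.0])         # hypothetical surface landmark
cam_positions = [np.array([0.0, 0.0, 0.0]),
                 np.array([2.0, 0.0, 0.0])]      # two spacecraft positions along the approach
rng = np.random.default_rng(0)
observations = [project(true_point, c) + rng.normal(0.0, 0.5, 2) for c in cam_positions]

def residuals(point):
    # Stack the reprojection errors from all views of this landmark.
    return np.concatenate([project(point, c) - z
                           for c, z in zip(cam_positions, observations)])

estimate = least_squares(residuals, x0=np.array([0.0, 0.0, 5.0])).x
print(estimate)   # should land close to true_point
```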

    Monocular-Based Pose Determination of Uncooperative Known and Unknown Space Objects

    In order to support spacecraft proximity operations, such as on-orbit servicing and spacecraft formation flying, several vision-based techniques exist to determine the relative pose of an uncooperative orbiting object with respect to the spacecraft. Depending on whether the object is known or unknown, a shape model of the orbiting target may have to be constructed autonomously using only optical measurements. In this paper, we investigate two vision-based approaches for pose estimation of uncooperative orbiting targets: one that is general and versatile in that it requires no a priori knowledge of the target, and another that requires knowledge of the target's shape geometry. The former uses an estimation algorithm for the translational and rotational dynamics to sequentially perform simultaneous pose determination and 3D shape reconstruction of the unknown target, while the latter relies on a known 3D model of the target's geometry to provide a point-by-point pose solution. The architecture and implementation of both methods are presented, and their achievable performance is evaluated through numerical simulations. In addition, a computer vision processing strategy for feature detection and matching and the Structure from Motion (SfM) algorithm for on-board 3D reconstruction are discussed and validated using a dataset of images that are synthetically generated according to a chaser/target relative motion in Geosynchronous Orbit (GEO).
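
    For the known-target case, a point-by-point pose solution of the kind described here can be obtained with a Perspective-n-Point solver once model points are matched to image detections. The sketch below uses OpenCV's solvePnP on a hypothetical six-point target model with placeholder intrinsics; it is not the paper's specific implementation.

```python
# Minimal sketch (illustrative): recover the chaser-to-target pose from known
# 3D model points and their 2D image detections via PnP.
import numpy as np
import cv2

# Six feature points on a hypothetical target model, expressed in the body frame.
model_points = np.array([[0.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0],
                         [0.0, 0.0, 1.0],
                         [1.0, 1.0, 0.0],
                         [1.0, 0.0, 1.0]])
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])             # placeholder camera intrinsics

# Synthesize image detections from a known relative pose, then recover that pose.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.3, -0.1, 8.0])
image_points, _ = cv2.projectPoints(model_points, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, None)
print(ok, rvec.ravel(), tvec.ravel())       # should match the simulated pose
```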

    Robust Estimation Framework with Semantic Measurements

    Conventional simultaneous localization and mapping (SLAM) algorithms rely on geometric measurements and require loop-closure detections to correct for drift accumulated over a vehicle trajectory. Semantic measurements can add measurement redundancy and provide an alternative form of loop closure. We propose two estimation algorithms that incorporate semantic measurements provided by vision-based object classifiers. An a priori map of regions where the objects can be detected is assumed. The first estimation framework is posed as a maximum-likelihood problem, where the likelihood function for semantic measurements is derived from the confusion matrices of the object classifiers. The second estimation framework consists of two parts: 1) a continuous-state estimation formulation that includes semantic measurements as a form of state constraints, and 2) a discrete-state estimation formulation used to compute the certainty of object detection measurements using a Hidden Markov Model (HMM). The advantages of incorporating semantic measurements in these frameworks are demonstrated in numerical simulations; in particular, the proposed estimation algorithms improve upon the robustness and accuracy of conventional SLAM algorithms.
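
    The confusion-matrix likelihood idea can be illustrated in a few lines: the probability of an observed classifier label given a hypothesized true object class is an entry of the confusion matrix, and the maximum-likelihood hypothesis maximizes the product of those entries over a detection sequence. The matrix values and detection sequence below are placeholders, and this omits the continuous-state and HMM components of the full framework.

```python
# Minimal sketch (illustrative): maximum-likelihood object class from a sequence
# of noisy classifier outputs, using the classifier's confusion matrix.
import numpy as np

# confusion[i, j] = P(detected class j | true class i); placeholder values.
confusion = np.array([
    [0.80, 0.15, 0.05],
    [0.10, 0.85, 0.05],
    [0.05, 0.10, 0.85],
])

detections = [0, 0, 1, 0]   # classifier outputs observed along the trajectory

# Log-likelihood of the detection sequence under each hypothesized true class.
log_likelihood = [sum(np.log(confusion[c, z]) for z in detections)
                  for c in range(confusion.shape[0])]
ml_class = int(np.argmax(log_likelihood))
print(ml_class, log_likelihood)
```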
